What Do You Need Artificial Intelligence For?
Artificial intelligence (AI) is the umbrella term for technologies that enable machines to solve tasks independently, often inspired by human thinking and learning. Machine learning is a subfield of AI in which algorithms are not rigidly programmed but instead learn patterns and relationships from example data. Instead of every rule being set manually, the system learns for itself how to map inputs to outputs. Deep learning is a specialized form of machine learning based on artificial neural networks with many processing layers. This architecture makes it possible to detect very complex patterns and to deliver precise results even under varying conditions.
What Are the Benefits of Using AI in Quality Control?
| Problem areas of manual control | Solution through advantages of AI-powered image processing |
|---|---|
| Inconsistent assessment of quality | Consistent and repeatable assessment based on large data sets |
| Limited attention span | 24/7 operation without fatigue |
| Time-consuming documentation of decisions | Automatic image storage with heatmap display and score value for verifiability and traceability |
| Higher personnel costs, staff shortages and high training costs | Scalable regardless of staff availability, low entry barrier thanks to reduced training effort |
When Are Rule-Based and When Are AI Image Processing Systems Used?
Typical Applications
Rule-Based Image Processing
- Measurement, measurement tasks
- Code reading
- Precise alignment, positioning (also robot vision, robot guidance)

Combinations and Overlaps
- Inspection, defect detection
- Identification (code reading, OCR/character recognition)
- Localization of objects and features (also robot vision, robot guidance)

AI-Based Image Processing
- Detection of highly varying objects or errors
- Challenging OCR (e.g. poor print quality, varying backgrounds)
- Localization of objects with high variance
- Classification (e.g. of materials or textures)
Which AI Technologies Are Used in Image Processing?
Classification
- Multi-class: one class per image, e.g. “screw”
- Multi-label: multiple classes per image possible, e.g. “screw”, “nail”
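To make the difference concrete, here is a minimal NumPy sketch (not from the original text; the class names and the 0.5 threshold are illustrative): a softmax head picks exactly one class, while independent sigmoid scores allow several labels per image.

```python
import numpy as np

def softmax(z):
    """Multi-class head: scores compete, probabilities sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(z):
    """Multi-label head: each class is scored independently in [0, 1]."""
    return 1 / (1 + np.exp(-z))

classes = ["screw", "nail", "washer"]   # illustrative class names
logits = np.array([2.0, 1.5, -1.0])     # raw network outputs

# Multi-class: exactly one class per image -> argmax of the softmax.
print("multi-class:", classes[int(np.argmax(softmax(logits)))])

# Multi-label: every class above a threshold counts as present.
present = [c for c, p in zip(classes, sigmoid(logits)) if p > 0.5]
print("multi-label:", present)   # here: ["screw", "nail"]
```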
Object Detection
Object detection locates and classifies multiple objects in the image using bounding boxes. For each object found, it specifies which class it belongs to and where exactly it is located in the image. A distinction is made between “axis-aligned” (as shown in the picture) and “oriented” object detection. With oriented object detection, each bounding box is aligned with its object and describes the smallest possible enclosing box.
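The geometric difference can be shown with a short NumPy sketch (the box values are made up for illustration): the axis-aligned box must enclose all corners of the rotated object and is therefore larger than the oriented box.

```python
import numpy as np

# Oriented box: center, size and rotation angle (values are illustrative).
cx, cy, w, h, angle_deg = 100.0, 80.0, 60.0, 20.0, 30.0

# Compute the four corner points of the oriented box.
t = np.deg2rad(angle_deg)
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
offsets = np.array([[-w/2, -h/2], [w/2, -h/2], [w/2, h/2], [-w/2, h/2]])
corners = offsets @ R.T + [cx, cy]

# The axis-aligned box must enclose all rotated corners, so it is larger.
x_min, y_min = corners.min(axis=0)
x_max, y_max = corners.max(axis=0)
print("oriented box area:    ", w * h)                               # 1200
print("axis-aligned box area:", (x_max - x_min) * (y_max - y_min))   # ~2932
```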
Segmentation
Segmentation assigns a class to each individual pixel in the image, exactly delimiting objects (e.g. nail, screw, background) or defects (e.g. paint defects).
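A minimal sketch of what such a per-pixel mask looks like in code (class indices and values are illustrative):

```python
import numpy as np

# Hypothetical class indices for a tiny 4 x 6 pixel image region.
BACKGROUND, NAIL, SCREW = 0, 1, 2
mask = np.array([
    [0, 0, 2, 2, 0, 0],
    [0, 2, 2, 2, 0, 1],
    [0, 2, 2, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
])

# Per-pixel classes delimit objects exactly, e.g. for area measurements.
for name, idx in [("background", BACKGROUND), ("nail", NAIL), ("screw", SCREW)]:
    print(f"{name}: {(mask == idx).sum()} pixels")
```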
Note: In addition to general-purpose AI models, there are increasingly many AI models trained for a specific use case, such as Deep OCR. Optical character recognition with Deep OCR uses neural networks trained on large amounts of text images to extract letters and numbers. Unlike traditional OCR, it enables precise recognition of dynamic text with variable font sizes and different backgrounds, even on specially designed or damaged prints and labels.
What Is an AI Model?
An AI model is built up in layers: the “input layer” receives the raw data (e.g. images), features are automatically detected in the “hidden layers”, and the “output layer” makes a decision based on them.
During training, the AI model compares its predictions with the ground truth and adjusts the weightings step by step. This learning process is repeated across many examples until the AI model reliably recognizes patterns.
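As a minimal, self-contained illustration of this learning process (a toy NumPy network, not the architecture used in practice): the forward pass runs input layer, hidden layer, output layer; the prediction is compared with the ground truth; and the weightings are adjusted step by step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (XOR): input layer (2 values) -> hidden layer (4) -> output layer (1).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # ground truth

W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
lr = 0.5                                               # learning rate

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass through the layers.
    h = np.tanh(X @ W1 + b1)                           # hidden layer features
    out = sigmoid(h @ W2 + b2)                         # output layer decision
    # Compare prediction with ground truth, back-propagate the error ...
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h**2)
    # ... and adjust the weightings step by step.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())                            # should approach [0, 1, 1, 0]
```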
Not every AI model is a neural network. “AI model” is a generic term covering many types of algorithms, including decision trees, statistical models and neural networks. The latter are particularly suitable for complex tasks such as image recognition or speech processing. Nevertheless, the terms “neural network” and “AI model” are often used synonymously.
ONNX – the Universal Exchange Format
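ONNX (Open Neural Network Exchange) is an open format for passing trained models between training frameworks and inference runtimes. As a minimal sketch, assuming a PyTorch training workflow and using a toy stand-in model (not from the original text):

```python
import torch
import torch.nn as nn

# Hypothetical toy classifier standing in for a trained AI model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),            # two classes, e.g. OK / not OK
)
model.eval()

# A dummy input fixes the input shape, here a 320 x 320 RGB image.
dummy = torch.randn(1, 3, 320, 320)

# Export to ONNX; the file can then run in any ONNX-compatible runtime.
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["image"], output_names=["scores"])
```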
The wenglor AI Loop – How AI Works in Industrial Image Processing
What Is Important when Creating Suitable Data Sets?
A higher resolution shows more details, but requires longer training times and more computing resources. The data set images are therefore often reduced for training, e.g. to 320 × 320 pixels (the AI input image).
Important: The key feature must also be clearly recognizable in this reduced resolution. What is visible to the human eye can usually also be captured by the AI model.
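A quick way to perform this check is to downscale a sample image to the AI input size and inspect it manually; a minimal Pillow sketch (the file names are hypothetical):

```python
from PIL import Image

# Downscale a data set image to the AI input size used for training.
img = Image.open("sample.png")       # hypothetical data set image
small = img.resize((320, 320))       # AI input image, as in the text above

# Save the reduced image for a side-by-side check:
# is the key feature still clearly recognizable?
small.save("sample_320.png")
print(img.size, "->", small.size)
```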
Important: Augmentation must remain realistic and application-oriented, as its use has a major impact on the balanced accuracy of the AI model. For example, a rotated part may itself constitute a defect, so rotation is not a suitable augmentation for every application.
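A sketch of application-oriented augmentation along these lines, using Pillow (file names and parameter values are illustrative; which transformations are realistic depends on the application):

```python
from PIL import Image, ImageEnhance

img = Image.open("sample.png")       # hypothetical data set image

# Realistic, application-oriented variations: moderate brightness
# changes and mirroring often match real conditions on the line.
augmented = [
    ImageEnhance.Brightness(img).enhance(0.8),   # slightly darker
    ImageEnhance.Brightness(img).enhance(1.2),   # slightly brighter
    img.transpose(Image.FLIP_LEFT_RIGHT),        # mirrored part
]

# Rotation is deliberately left out here: as noted above, a rotated
# part may itself be a defect, so rotating could teach wrong labels.
for i, aug in enumerate(augmented):
    aug.save(f"sample_aug_{i}.png")
```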
How Can the Effort of the Label Process Be Minimized?
Tip: Boundary samples should be explicitly marked as such using tags. This information can then be included in the subsequent validation of the network.
What Must Be Observed when Training an AI Model?
Tip: Training the AI model several times on the identical data set can lead to different performance values. Differences above 5% indicate an inconsistent data set.
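A simple way to apply this rule of thumb (the performance values below are purely illustrative):

```python
# Hypothetical balanced-accuracy values from five trainings on the same data set.
runs = [0.93, 0.95, 0.88, 0.94, 0.92]

spread = max(runs) - min(runs)
print(f"spread across runs: {spread:.0%}")
if spread > 0.05:
    print("Spread above 5% -> check the data set for inconsistencies.")
```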
The higher the resolution,
- the longer the evaluation time,
- the higher the RAM requirements for inference (execution),
- the longer the training takes,
- the more training data is needed to achieve the same balanced accuracy.
Tips and Tricks on the Traceability of the AI Model
- Training data is used to train the AI model and typically accounts for 70–80% of the data.
- Validation data is used during training to tune the weightings and to check whether the AI model is overfitting. It typically accounts for 10–20% of the data.
- Test data is only used to evaluate the final quality of the AI model and accounts for 10–20% of the data.
| Features | Hold-out validation | K-fold cross-validation |
|---|---|---|
| Description | Data set is split once, e.g. 80% training/20% test | Data set is divided into k parts; the AI model is evaluated k times, each with different test data |
| Benefits | Fast and simple to set up | More robust evaluation; every sample is used as test data exactly once |
| Disadvantages | Result depends heavily on the single split, especially with small data sets | k training runs required, correspondingly higher computing effort |
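For illustration, a minimal scikit-learn sketch contrasting the two strategies on a synthetic data set (the model and data are placeholders, not the AI models discussed above):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

X, y = make_classification(n_samples=200, random_state=0)  # placeholder data
model = LogisticRegression(max_iter=1000)                  # placeholder model

# Hold-out: a single 80/20 split -- fast, but depends on that one split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
print("hold-out accuracy:", model.fit(X_tr, y_tr).score(X_te, y_te))

# K-fold (k = 5): five evaluations, each part serves as test data once.
scores = cross_val_score(model, X, y, cv=5)
print("5-fold mean accuracy:", scores.mean())
```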
How Can AI Be Implemented in the Image Processing Application?
Retraining or adaptation of the AI model typically becomes necessary when:
- New classes appear that need to be detected.
- The score values decrease, e.g. due to batch changes, contamination or wear of workpiece carriers, or reduced light output (see the monitoring sketch after this list).
- Requirements for balanced accuracy are changing.
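A minimal sketch of how decreasing score values could be noticed in practice, using a rolling average of the inference scores (window size and threshold are assumptions to be tuned per application):

```python
from collections import deque

WINDOW = 50        # number of recent inspections to average (assumption)
THRESHOLD = 0.80   # minimum acceptable average score (assumption)

recent_scores = deque(maxlen=WINDOW)

def check_score(score: float) -> None:
    """Track the rolling average of inference scores and warn on drift."""
    recent_scores.append(score)
    if len(recent_scores) == WINDOW:
        avg = sum(recent_scores) / WINDOW
        if avg < THRESHOLD:
            print(f"Average score {avg:.2f} below {THRESHOLD} -> "
                  "check lighting, contamination, batch changes; retrain if needed.")

# Example: feed each inspection result into the monitor.
check_score(0.91)
```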
Comparison of Three Basic Approaches for Training AI Models
| | Cloud (e.g. AI Lab) | On-premise | Edge |
|---|---|---|---|
| Features | Externally hosted cloud | Enterprise cloud, server, local PC | Local, directly on the device in production |
| Complexity of the application | Simple to complex | Individual | Simple |
| Cost control | Pay-per-use, no acquisition costs for training hardware | Investments in hardware and operating costs | No additional costs (runs on smart device) |
| Setup and access | No special setup required, accessible with Internet connection via browser | Installation and setup of suitable training hardware required | Runs directly on the hardware (e.g. smart camera), browser/third-party software required |
| Training flexibility | High flexibility from simple to complex AI models and small to large data sets | Responsibility lies with the operator | Training flexibility is often limited |
| Validation and traceability | Statistically robust validation based on large amounts of data, stored centrally | Statistically robust validation based on large amounts of data, storage is the responsibility of the user | Simple manual function test using individual parts |
| Collaboration and data set management | Centrally manageable and team-oriented | Depending on setup | Standalone solution only, no real collaboration or role-based control |
| Scalability | Automatically scalable via cloud server | Depending on the operator (storage, computing power, software solution) | Can only be scaled by purchasing additional edge devices |
| Availability and deployment | Centrally accessible from any device; scalable deployment: rollout of AI models across multiple lines, locations or regions | Available locally or within the network | Decentralized, available on local hardware or in the network (offline) |
| Data security, access control and backups | Dependent on cloud provider, centrally with user roles and backups | Depending on the setup, responsibility of the user | Local data storage is secure, but without central user management or automatic backups |